
Online Object Tracking, Learning and Parsing with And-Or Graphs



Abstract

This paper presents a method, called AOGTracker, for simultaneously tracking, learning and parsing (TLP) of unknown objects in video sequences with a hierarchical and compositional And-Or graph (AOG) representation. The AOG captures both structural and appearance variations of a target object in a principled way. The TLP method is formulated in the Bayesian framework with spatial and temporal dynamic programming (DP) algorithms inferring object bounding boxes on-the-fly. During online learning, the AOG is discriminatively learned using latent SVM to account for appearance (e.g., lighting and partial occlusion) and structural (e.g., different poses and viewpoints) variations of a tracked object, as well as distractors (e.g., similar objects) in the background. Three key issues in online inference and learning are addressed: (i) maintaining purity of positive and negative examples collected online, (ii) controlling model complexity in latent structure learning, and (iii) identifying critical moments to re-learn the structure of the AOG based on its intrackability. The intrackability measures the uncertainty of an AOG based on its score maps in a frame. In experiments, our AOGTracker is tested on two popular tracking benchmarks with the same parameter setting: the TB-100/50/CVPR2013 benchmarks, and the VOT benchmarks, namely VOT 2013, 2014, 2015 and TIR2015 (thermal imagery tracking). In the former, our AOGTracker outperforms state-of-the-art tracking algorithms including two trackers based on deep convolutional networks. In the latter, our AOGTracker outperforms all other trackers in VOT2013 and is comparable to the state-of-the-art methods in VOT2014, 2015 and TIR2015.
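The intrackability mentioned in the abstract is what triggers re-learning of the AOG structure. As a rough illustration only (the exact definition is given in the paper), the sketch below treats intrackability as the Shannon entropy of a frame's score map normalized into a distribution: a peaked map (confident localization) yields low entropy, while a flat or multi-modal map (ambiguous frames, distractors) yields high entropy. The function name, softmax temperature, and re-learning threshold are illustrative assumptions, not taken from the paper.

    import numpy as np

    def intrackability(score_map, temperature=1.0):
        # Sketch: entropy of the score map viewed as a distribution over
        # candidate object locations. Higher entropy means more ambiguity
        # about where the target is, i.e. the AOG is less "trackable" here.
        s = np.asarray(score_map, dtype=np.float64).ravel()
        p = np.exp((s - s.max()) / temperature)  # shifted softmax for stability
        p /= p.sum()
        return float(-(p * np.log(p + 1e-12)).sum())

    # Illustrative use: re-learn the AOG structure when ambiguity spikes.
    RELEARN_THRESHOLD = 3.0  # assumed value, not from the paper
    peaked = np.zeros((20, 20)); peaked[10, 10] = 10.0  # confident frame
    flat = 0.01 * np.random.randn(20, 20)               # ambiguous frame
    for name, m in [("peaked", peaked), ("flat", flat)]:
        h = intrackability(m)
        print(name, round(h, 2), "re-learn" if h > RELEARN_THRESHOLD else "keep")

In this toy run the peaked map gives an entropy near zero and the flat map an entropy close to the uniform-distribution maximum, so only the ambiguous frame would trigger structure re-learning under the assumed threshold.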

